89 research outputs found

    Signed Network Modeling Based on Structural Balance Theory

    The modeling of networks, specifically with generative models, has been shown to provide a wealth of information about the underlying network structure, along with many other benefits that follow from their construction. Recently there has been a considerable increase in interest in better understanding and modeling networks, but the vast majority of this work has focused on unsigned networks. However, many networks have both positive and negative links (i.e., signed networks), especially in online social media, and due to this added complexity they inherently have properties not found in unsigned networks. Specifically, the positive-to-negative link ratio and the distribution of signed triangles are properties unique to signed networks that need to be explicitly modeled. This is because their underlying dynamics are not random but governed by social theories, such as Structural Balance Theory, which loosely states that users in social networks prefer triadic relations that involve less tension. We therefore propose a model for signed networks based on Structural Balance Theory and the unsigned Transitive Chung-Lu model. Our model introduces two parameters that help maintain the positive link ratio and the proportion of balanced triangles. Empirical experiments on three real-world signed networks demonstrate the importance of designing models specific to signed networks, based on social theories, to better maintain signed network properties while generating synthetic networks. Comment: CIKM 2018: https://dl.acm.org/citation.cfm?id=327174
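
    The core mechanism described above, assigning edge signs so that the positive-link ratio and the fraction of balanced triangles are both preserved, can be illustrated with a short sketch. The Python code below is not the authors' model: it samples an unsigned Chung-Lu graph with networkx and then assigns signs with a bias toward closing balanced triangles. The parameters `eta` (target positive ratio) and `beta` (balance preference) and all function names are hypothetical stand-ins for the paper's two parameters.

```python
# Minimal sketch of sign assignment under structural balance (illustrative only).
import random
import networkx as nx

def signed_chung_lu_sketch(degrees, eta=0.8, beta=0.9, seed=0):
    rng = random.Random(seed)
    # Unsigned Chung-Lu graph with the given expected degree sequence.
    g = nx.expected_degree_graph(degrees, seed=seed, selfloops=False)
    sign = {}
    for u, v in g.edges():
        # A triangle is balanced iff the product of its three edge signs is +1,
        # so each already-signed wedge (u-w, v-w) "votes" for the sign s1*s2
        # that would close it in a balanced way.
        votes = []
        for w in nx.common_neighbors(g, u, v):
            s1 = sign.get(frozenset((u, w)))
            s2 = sign.get(frozenset((v, w)))
            if s1 is not None and s2 is not None:
                votes.append(s1 * s2)
        if votes and rng.random() < beta:
            new_sign = 1 if sum(votes) >= 0 else -1  # favor balanced triangles
        else:
            new_sign = 1 if rng.random() < eta else -1  # maintain positive ratio
        sign[frozenset((u, v))] = new_sign
    nx.set_edge_attributes(g, {e: sign[frozenset(e)] for e in g.edges()}, "sign")
    return g
```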

    Relational Multi-Task Learning: Modeling Relations between Data and Tasks

    A key assumption in multi-task learning is that at inference time the multi-task model only has access to a given data point, not to the data point's labels from other tasks. This presents an opportunity to extend multi-task learning to utilize a data point's labels from auxiliary tasks and thereby improve performance on the new task. Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions on the new task. We develop MetaLink, whose key innovation is a knowledge graph that connects data points and tasks and thus allows us to leverage labels from auxiliary tasks. The knowledge graph consists of two types of nodes: (1) data nodes, whose features are data embeddings computed by the neural network, and (2) task nodes, whose features are the last layer's weights for each task. The edges in this knowledge graph capture data-task relationships, and the edge label captures the label of a data point on a particular task. Under MetaLink, we reformulate the new task as a link label prediction problem between a data node and a task node. The MetaLink framework provides the flexibility to model knowledge transfer from auxiliary task labels to the task of interest. We evaluate MetaLink on 6 benchmark datasets in both the biochemical and vision domains. Experiments demonstrate that MetaLink can successfully utilize the relations among different tasks, outperforming state-of-the-art methods under the proposed relational multi-task learning setting, with up to 27% improvement in ROC AUC. Comment: ICLR 2022 Spotlight
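
    To make the data/task knowledge graph concrete, here is a minimal sketch, not MetaLink itself: data nodes carry embeddings, task nodes carry last-layer weights, edges carry known auxiliary-task labels, and a new-task prediction is scored as a link between a data node and a task node. The dot-product scoring and the simple label-aggregation step are illustrative assumptions; the actual model refines node states with learned message passing.

```python
# Illustrative sketch of a data/task knowledge graph with link label prediction.
import numpy as np

def build_graph(data_emb, task_weights, known_labels):
    """data_emb: (n_data, d); task_weights: (n_tasks, d);
    known_labels: {(data_idx, task_idx): 0 or 1} from auxiliary tasks."""
    nodes = {"data": np.asarray(data_emb), "task": np.asarray(task_weights)}
    edges = dict(known_labels)  # edge label = label of a data point on a task
    return nodes, edges

def predict_link(nodes, edges, i, t, step=0.1):
    # Fold data point i's known auxiliary-task labels into its representation
    # (a crude, hand-set update standing in for learned message passing),
    # then score the (data i, task t) link as a logistic dot product.
    h = nodes["data"][i].copy()
    for (j, s), y in edges.items():
        if j == i and s != t:
            h += (2 * y - 1) * step * nodes["task"][s]
    logit = h @ nodes["task"][t]
    return 1.0 / (1.0 + np.exp(-logit))
```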

    AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks

    AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most existing AutoML techniques consider each task independently and from scratch, which requires exploring many architectures and leads to high computational cost. Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring prior architectural design knowledge to the novel task of interest. Our key innovations are a task-model bank that captures the model performance over a diverse set of GNN architectures and tasks, and a computationally efficient task embedding that can accurately measure the similarity between tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models for the novel task by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and tasks with similar embeddings have similar best-performing architectures; (ii) AutoTransfer significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by an order of magnitude. Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN training information for 120,000 task-model combinations, to facilitate and inspire future research. Comment: ICLR 2023
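
    The design-prior step lends itself to a compact sketch. The code below is an illustrative reading of the step described above, not the released implementation: given an embedding for the novel task and a bank of (task embedding, design distribution) pairs, it takes the top-K most similar tasks under cosine similarity (an assumed similarity measure) and returns their similarity-weighted mixture of design distributions, which any downstream search algorithm could then sample from.

```python
# Illustrative sketch: similarity-weighted aggregation of top-K design priors.
import numpy as np

def design_prior(new_task_emb, bank_embs, bank_design_dists, k=3):
    """bank_embs: (n_tasks, d); bank_design_dists: (n_tasks, n_choices),
    each row a probability distribution over one design dimension."""
    e = new_task_emb / np.linalg.norm(new_task_emb)
    B = bank_embs / np.linalg.norm(bank_embs, axis=1, keepdims=True)
    sims = B @ e                       # cosine similarity to each stored task
    top = np.argsort(sims)[-k:]        # indices of the K most similar tasks
    w = np.maximum(sims[top], 0.0)     # clip negative similarities
    w = w / w.sum() if w.sum() > 0 else np.ones(len(top)) / len(top)
    prior = w @ bank_design_dists[top]  # similarity-weighted mixture
    return prior / prior.sum()          # renormalize into a distribution
```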
    • …